
    A Methodology for Discovering how to Adaptively Personalize to Users using Experimental Comparisons

    We explain and provide examples of a formalism that supports the methodology of discovering how to adapt and personalize technology by combining randomized experiments with variables associated with user models. We characterize a formal relationship between using technology to conduct A/B experiments and using it for adaptive personalization. The MOOClet Formalism [11] captures the equivalence between experimentation and personalization in its conceptualization of modular components of a technology. This motivates a unified software design pattern in which technology components that can be compared in an experiment can also be adapted based on contextual data, or personalized based on user characteristics. With the aid of a concrete use case, we illustrate the potential of the MOOClet formalism for a methodology that uses randomized experiments of alternative micro-designs to discover how to adapt technology based on user characteristics, and then dynamically implements these personalized improvements in real time.
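
    A minimal sketch of the unified design pattern described above, assuming hypothetical class and method names (this is not the authors' implementation): a modular component holds several versions, assigns one either by uniform randomization (an experiment) or via a policy over user variables (personalization), and logs every assignment.

        import random

        class MOOCletComponent:
            """A modular unit that can be A/B-tested or personalized (illustrative only)."""

            def __init__(self, versions, policy=None):
                self.versions = versions   # e.g. {"A": "prompt text A", "B": "prompt text B"}
                self.policy = policy       # optional: maps user variables -> version key
                self.log = []              # records every assignment for later analysis

            def assign(self, user_variables):
                if self.policy is not None:
                    key = self.policy(user_variables)            # adaptive personalization
                else:
                    key = random.choice(list(self.versions))     # uniform A/B randomization
                self.log.append({"user": user_variables, "version": key})
                return self.versions[key]

        # Start with randomization; later swap in a policy learned from the experiment.
        prompt = MOOCletComponent({"A": "Explain your answer.", "B": "Rate your confidence."})
        text = prompt.assign({"prior_knowledge": "low"})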

    Supporting Instructors in Collaborating with Researchers using MOOClets

    Most education and workplace learning takes place in classroom contexts far removed from laboratories or field sites with special arrangements for scientific research. But digital online resources provide a novel opportunity for large-scale efforts to bridge real-world and laboratory settings, supporting data collection and randomized A/B experiments that compare different versions of content or interactions [2]. However, there are substantial technological and practical barriers to aligning instructors and researchers in using learning technologies such as blended lessons/exercises and MOOCs both as a service for students and as a realistic context for conducting research. This paper explains how the concept of a MOOClet can facilitate researcher-practitioner collaborations. MOOClets [3] are defined as modular components of a digital resource that can be implemented in technology to: (1) allow modification to create multiple versions, (2) allow experimental comparison and personalization of different versions, and (3) reliably specify what data are collected. We suggest a framework in which instructors specify what kinds of changes to lessons, exercises, and emails they would be willing to adopt, and what data they will collect and make available. Researchers can then: (1) specify or design experiments that compare the effects of different versions on quantifiable outcomes, and (2) explore algorithms for maximizing particular outcomes by choosing alternative versions of a MOOClet based on the available input variables. We present a prototype survey tool for instructors intended to facilitate practitioner-researcher matches and successful collaborations.
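
    One way to picture the suggested framework is as a pair of declarations that can be checked against each other; the field names below are illustrative assumptions, not the paper's survey tool: the instructor states which MOOClets may be modified and which data will be shared, and the researcher states the comparison and outcome of interest.

        # Hypothetical instructor/researcher agreement for one MOOClet (illustrative only).
        instructor_offer = {
            "mooclet": "week3_recursion_exercise",
            "modifiable": ["problem_statement", "hint_text", "reminder_email"],
            "data_shared": ["attempts", "correctness", "time_on_task"],
        }

        researcher_plan = {
            "mooclet": "week3_recursion_exercise",
            "versions": {"hint_text": ["worked_example", "self_explanation_prompt"]},
            "outcome": "correctness",
            "design": "uniform_random",   # could later switch to an adaptive policy
        }

        # A collaboration is feasible only if the plan stays inside the instructor's offer.
        feasible = (
            researcher_plan["mooclet"] == instructor_offer["mooclet"]
            and set(researcher_plan["versions"]) <= set(instructor_offer["modifiable"])
            and researcher_plan["outcome"] in instructor_offer["data_shared"]
        )
        print("collaboration feasible:", feasible)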

    Student Usage of Q&A Forums: Signs of Discomfort?

    Q&A forums are widely used in large classes to provide scalable support. In addition to offering students a space to ask questions, these forums aim to create a community and promote engagement. Prior literature suggests that the way students participate in Q&A forums varies and that most students do not actively post questions or engage in discussions. Students may display different participation behaviours depending on their comfort levels in the class. This paper investigates students' use of a Q&A forum in a CS1 course. We also analyze student opinions about the forum to explain the observed behaviour, focusing on students' lack of visible participation (lurking, anonymity, private posting). We analyzed forum data collected in a CS1 course across two consecutive years and invited students to complete a survey about their perspectives on forum usage. Aside from a small cohort of highly engaged students, we confirmed that most students do not actively read or post on the forum. We discuss students' reasons for the low level of engagement and barriers to participating visibly. Common reasons include fearing that they lack knowledge and fearing repercussions from being visible to the student community.
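
    The kind of participation breakdown such an analysis relies on can be sketched roughly as follows; the per-student fields are hypothetical, not the study's data schema: each student is classified by whether they posted visibly, posted anonymously or privately, only read, or never engaged.

        from collections import Counter

        # Hypothetical per-student forum records; field names are illustrative.
        students = [
            {"id": 1, "public_posts": 3, "anon_or_private_posts": 0, "views": 40},
            {"id": 2, "public_posts": 0, "anon_or_private_posts": 2, "views": 25},
            {"id": 3, "public_posts": 0, "anon_or_private_posts": 0, "views": 12},
            {"id": 4, "public_posts": 0, "anon_or_private_posts": 0, "views": 0},
        ]

        def category(s):
            if s["public_posts"] > 0:
                return "visible poster"
            if s["anon_or_private_posts"] > 0:
                return "anonymous/private poster"
            if s["views"] > 0:
                return "lurker (reads only)"
            return "non-user"

        print(Counter(category(s) for s in students))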

    Opportunities for Adaptive Experiments to Enable Continuous Improvement that Trades-off Instructor and Researcher Incentives

    Randomized experimental comparisons of alternative pedagogical strategies could provide useful empirical evidence for instructors' decision-making. However, traditional experiments do not offer a clear and simple pathway for rapidly using data to increase the chances that students in an experiment receive the best conditions. Drawing inspiration from the use of machine learning and experimentation in product development at leading technology companies, we explore how adaptive experimentation might help in continuous course improvement. In adaptive experiments, as different arms/conditions are deployed to students, data are analyzed and used to change the experience for future students. This can be done using machine learning algorithms to identify which actions are more promising for improving student experience or outcomes. The algorithm can then dynamically deploy the most effective conditions to future students, resulting in better support for students' needs. We illustrate the approach with a case study providing a side-by-side comparison of traditional and adaptive experimentation on self-explanation prompts in online homework problems in a CS1 course. This provides a first step in exploring how this methodology can be useful in bridging research and practice for continuous improvement.
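
    The adaptive experiment described here can be pictured with a standard Beta-Bernoulli Thompson sampling loop; this is a generic sketch with made-up success rates, not the study's exact algorithm: each new student receives the condition whose sampled success rate is highest, and the posterior is updated as outcomes arrive.

        import random

        # Two conditions, e.g. homework with vs. without a self-explanation prompt.
        arms = {"prompt": [1, 1], "no_prompt": [1, 1]}   # Beta(alpha, beta) posteriors

        def assign_condition():
            # Sample a plausible success rate for each arm and pick the best draw.
            draws = {a: random.betavariate(ab[0], ab[1]) for a, ab in arms.items()}
            return max(draws, key=draws.get)

        def record_outcome(arm, success):
            # Update the chosen arm's posterior with the observed binary outcome.
            arms[arm][0] += int(success)
            arms[arm][1] += 1 - int(success)

        # Simulated deployment: later students increasingly receive the better condition.
        for _ in range(500):
            arm = assign_condition()
            true_rate = {"prompt": 0.65, "no_prompt": 0.55}[arm]   # hypothetical values
            record_outcome(arm, random.random() < true_rate)
        print(arms)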

    Contextual Bandits in a Survey Experiment on Charitable Giving: Within-Experiment Outcomes versus Policy Learning

    We design and implement an adaptive experiment (a "contextual bandit") to learn a targeted treatment assignment policy, where the goal is to use a participant's survey responses to determine which charity to expose them to in a donation solicitation. The design balances two competing objectives: optimizing the outcomes for the subjects in the experiment ("cumulative regret minimization") and gathering data that will be most useful for policy learning, that is, for learning an assignment rule that will maximize welfare if used after the experiment ("simple regret minimization"). We evaluate alternative experimental designs by collecting pilot data and then conducting a simulation study. Next, we implement our selected algorithm. Finally, we perform a second simulation study, anchored to the collected data, that evaluates the benefits of the algorithm we chose. Our first result is that the value of a learned policy in this setting is higher when data are collected via uniform randomization rather than adaptively using standard cumulative regret minimization or policy learning algorithms. We propose a simple heuristic for adaptive experimentation that improves upon uniform randomization from the perspective of policy learning, at the expense of increased cumulative regret relative to alternative bandit algorithms. The heuristic modifies an existing contextual bandit algorithm by (i) imposing a lower bound on assignment probabilities that decays slowly, so that no arm is discarded too quickly, and (ii) after adaptively collecting data, restricting policy learning to select from arms for which sufficient data has been gathered.
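
    The two modifications in the heuristic can be illustrated generically; the constants and function names below are assumptions, not the authors' code: a slowly decaying floor keeps every arm's assignment probability above a minimum, and post-experiment policy learning is restricted to arms with enough observations.

        import numpy as np

        def floor_probabilities(probs, t, c=0.1, decay=0.25):
            # Clip assignment probabilities to a floor that decays slowly with time t.
            floor = c / (t ** decay)
            probs = np.maximum(np.asarray(probs, dtype=float), floor)
            return probs / probs.sum()   # renormalize to a proper distribution

        def eligible_arms(counts, min_n=30):
            # Restrict the learned policy to arms with at least min_n observations.
            return [arm for arm, n in counts.items() if n >= min_n]

        # Example: the bandit's current probabilities get floored before assignment,
        # and only well-sampled arms are candidates for the final learned policy.
        print(floor_probabilities([0.95, 0.04, 0.01], t=200))
        print(eligible_arms({"charity_A": 120, "charity_B": 45, "charity_C": 12}))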

    Getting too personal(ized): The importance of feature choice in online adaptive algorithms

    Digital educational technologies offer the potential to customize students' experiences and to learn what works for which students, enhancing the technology as more students interact with it. We consider whether and when attempting to discover how to personalize has a cost, such as when adapting to personal information delays the adoption of policies that benefit all students. We explore these issues in the context of using multi-armed bandit (MAB) algorithms to learn a policy for which version of an educational technology to present to each student, varying the relation between student characteristics and outcomes as well as whether the algorithm is aware of these characteristics. Through simulations, we demonstrate that including student characteristics for personalization can be beneficial when those characteristics are needed to learn the optimal action. In other scenarios, this inclusion decreases the performance of the bandit algorithm. Moreover, including unneeded student characteristics can systematically disadvantage students with less common values for these characteristics. Our simulations do, however, suggest that real-time personalization will be helpful in particular real-world scenarios, and we illustrate this through case studies using existing experimental results in ASSISTments. Overall, our simulations show that adaptive personalization in educational technologies can be a double-edged sword: real-time adaptation improves student experiences in some contexts, but the slower adaptation and potentially discriminatory results mean that a more personalized model is not always beneficial. Comment: 11 pages, 6 figures. Correction to the original article published at https://files.eric.ed.gov/fulltext/ED607907.pdf: the Thompson sampling algorithm in the original article overweights older data, resulting in an overexploitative multi-armed bandit; this arXiv version uses a normal Thompson sampling algorithm.
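
    The trade-off examined in the simulations can be made concrete with a rough sketch (generic Thompson sampling code, not the paper's simulation setup): a contextual variant keeps separate posteriors per student subgroup, which helps when the best version genuinely differs by subgroup but splits the data and can slow learning, particularly for the smaller subgroup, when it does not.

        import random

        # Beta posteriors per arm; the contextual bandit keeps one set per subgroup.
        noncontextual = {"A": [1, 1], "B": [1, 1]}
        contextual = {g: {"A": [1, 1], "B": [1, 1]} for g in ("group0", "group1")}

        def thompson_pick(posteriors):
            draws = {a: random.betavariate(p[0], p[1]) for a, p in posteriors.items()}
            return max(draws, key=draws.get)

        def update(posteriors, arm, reward):
            posteriors[arm][0] += reward
            posteriors[arm][1] += 1 - reward

        # If the optimal arm is the same for both groups, the contextual bandit splits
        # its data and converges more slowly for the rarer group; if the optimal arm
        # differs by group, only the contextual bandit can learn the right version.
        group = random.choice(["group0", "group1"])
        arm_plain = thompson_pick(noncontextual)
        arm_ctx = thompson_pick(contextual[group])
        update(noncontextual, arm_plain, reward=1)        # toy outcome for illustration
        update(contextual[group], arm_ctx, reward=1)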

    Impact of Guidance and Interaction Strategies for LLM Use on Learner Performance and Perception

    Personalized chatbot-based teaching assistants can be crucial in addressing increasing classroom sizes, especially where direct teacher presence is limited. Large language models (LLMs) offer a promising avenue, with growing research exploring their educational utility. However, the challenge lies not only in establishing the efficacy of LLMs but also in discerning the nuances of interaction between learners and these models, which affect learners' engagement and results. We conducted a formative study in an undergraduate computer science classroom (N=145) and a controlled experiment on Prolific (N=356) to explore the impact of four pedagogically informed guidance strategies and the interaction between student approaches and LLM responses. Direct LLM answers marginally improved performance, while refining student solutions fostered trust. Our findings suggest a nuanced relationship between the guidance provided and the LLM's role in either answering or refining student input. Based on our findings, we provide design recommendations for optimizing learner-LLM interactions.
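
    A rough sketch of how two of the contrasted interaction styles could be wired up; the prompt wording and the call_llm helper are hypothetical placeholders, not the study's materials: one strategy asks the model to answer directly, the other asks it only to refine the student's own attempt.

        # Hypothetical prompt templates for two guidance strategies; call_llm stands in
        # for whatever chat-completion client is actually used.
        DIRECT_ANSWER = (
            "You are a teaching assistant. Give the correct solution to the student's "
            "question, with a brief explanation.\n\nQuestion: {question}"
        )
        REFINE_SOLUTION = (
            "You are a teaching assistant. Do not give the answer. Point out what is "
            "right and wrong in the student's attempt and suggest the next step.\n\n"
            "Question: {question}\nStudent attempt: {attempt}"
        )

        def guide(call_llm, strategy, question, attempt=""):
            template = DIRECT_ANSWER if strategy == "direct" else REFINE_SOLUTION
            return call_llm(template.format(question=question, attempt=attempt))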